115 research outputs found

    Dynamic Bayesian Combination of Multiple Imperfect Classifiers

    Classifier combination methods need to make best use of the outputs of multiple, imperfect classifiers to enable higher accuracy classifications. In many situations, such as when human decisions need to be combined, the base decisions can vary enormously in reliability. A Bayesian approach to such uncertain combination allows us to infer the differences in performance between individuals and to incorporate any available prior knowledge about their abilities when training data is sparse. In this paper we explore Bayesian classifier combination, using the computationally efficient framework of variational Bayesian inference. We apply the approach to real data from a large citizen science project, Galaxy Zoo Supernovae, and show that our method far outperforms other established approaches to imperfect decision combination. We go on to analyse the putative community structure of the decision makers, based on their inferred decision-making strategies, and show that natural groupings are formed. Finally, we present a dynamic Bayesian classifier combination approach and investigate the changes in base classifier performance over time. Comment: 35 pages, 12 figures.
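
    The combination rule at the heart of such approaches can be illustrated compactly. The sketch below is a minimal, non-variational illustration of Bayesian classifier combination: each base classifier is summarised by a confusion matrix with a Dirichlet prior, and the posterior over the true label multiplies the per-classifier likelihoods of the observed decisions. All function and variable names are illustrative, not taken from the paper's implementation.

        import numpy as np

        def fit_confusion(true_labels, decisions, n_classes, alpha=1.0):
            """Posterior-mean confusion matrix P(decision | true class) under a
            symmetric Dirichlet(alpha) prior, estimated from a small labelled set."""
            counts = np.full((n_classes, n_classes), alpha)
            for t, d in zip(true_labels, decisions):
                counts[t, d] += 1
            return counts / counts.sum(axis=1, keepdims=True)

        def combine(decisions, confusions, class_prior):
            """Posterior over the true label given one decision per base classifier."""
            log_post = np.log(class_prior)
            for d, pi in zip(decisions, confusions):
                log_post += np.log(pi[:, d])
            post = np.exp(log_post - log_post.max())
            return post / post.sum()

        # Toy usage: two annotators of different reliability on a binary task.
        rng = np.random.default_rng(0)
        truth = rng.integers(0, 2, size=200)
        ann1 = np.where(rng.random(200) < 0.8, truth, 1 - truth)   # 80% accurate
        ann2 = np.where(rng.random(200) < 0.6, truth, 1 - truth)   # 60% accurate
        confs = [fit_confusion(truth, a, 2) for a in (ann1, ann2)]
        print(combine([1, 0], confs, class_prior=np.array([0.5, 0.5])))

    The more reliable annotator dominates the combined posterior, which is the kind of behaviour the full Bayesian treatment is designed to recover from the data rather than assume.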

    Toward Automatic Verification of Multiagent Systems for Training Simulations

    Advances in multiagent systems have led to their successful application in experiential training simulations, where students learn by interacting with agents who represent people, groups, structures, etc. These multiagent simulations must model the training scenario so that the students' success is correlated with the degree to which they follow the intended pedagogy. As these simulations increase in size and richness, it becomes harder to guarantee that the agents accurately encode the pedagogy. Testing with human subjects provides the most accurate feedback, but it can explore only a limited subspace of simulation paths. In this paper, we present a mechanism for using human data to verify the degree to which the simulation encodes the intended pedagogy. Starting with an analysis of data from a deployed multiagent training simulation, we then present an automated mechanism for using the human data to generate a distribution appropriate for sampling simulation paths. By generalizing from a small set of human data, the automated approach can systematically explore a much larger space of possible training paths and verify the degree to which a multiagent training simulation adheres to its intended pedagogy.
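
    As a rough illustration of the path-sampling idea (not the paper's mechanism), one can estimate a per-state distribution over actions from logged human trajectories, smooth it so that unobserved actions retain some probability mass, and then roll out many synthetic simulation paths from that distribution. The transition function step is assumed to be supplied by the simulator; all names here are hypothetical.

        import numpy as np
        from collections import defaultdict

        def fit_action_distribution(trajectories, n_actions, smoothing=0.5):
            """trajectories: list of [(state, action), ...] pairs from human runs.
            Additive smoothing keeps unseen actions at non-zero probability."""
            counts = defaultdict(lambda: np.full(n_actions, smoothing))
            for path in trajectories:
                for state, action in path:
                    counts[state][action] += 1
            return {s: c / c.sum() for s, c in counts.items()}

        def sample_path(policy, step, start_state, max_len, n_actions, rng):
            """Roll out one synthetic path; step(state, action) is the simulator's
            transition function (assumed available). Unseen states fall back to uniform."""
            state, path = start_state, []
            for _ in range(max_len):
                probs = policy.get(state, np.full(n_actions, 1.0 / n_actions))
                action = rng.choice(n_actions, p=probs)
                path.append((state, action))
                state = step(state, action)
            return path

    Sampling many such paths and checking each against the intended pedagogy gives coverage of a far larger space than the original human runs alone.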

    Species abundance dynamics under neutral assumptions: a Bayesian approach to the controversy

    1. Hubbell's 'Unified Neutral Theory of Biodiversity and Biogeography' (UNTB) has generated much controversy about both the realism of its assumptions and how well it describes the species abundance dynamics in real communities. 2. We fit a discrete-time version of Hubbell's neutral model to long-term macro-moth (Lepidoptera) community data from the Rothamsted Insect Survey (RIS) light-trap network in the United Kingdom. 3. We relax the assumption of constant community size and use a hierarchical Bayesian approach to show that the model does not fit the data well, as it would require impossible parameter values. 4. This is because the ecological communities fluctuate more than expected under neutrality. 5. The model, as presented here, can be extended to include environmental stochasticity, density dependence, or changes in population sizes that are correlated between different species.
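
    For concreteness, a discrete-time zero-sum neutral step of the kind underlying such models can be simulated in a few lines. The constant-community-size version below is purely illustrative (the analysis above relaxes that assumption), and the parameter names are generic rather than taken from the paper.

        import numpy as np

        def neutral_step(community, meta_rel_abund, m, n_deaths, rng):
            """community: 1-D array of species labels, one per individual.
            Each death is replaced by an immigrant drawn from the metacommunity
            with probability m, otherwise by the offspring of a random local individual."""
            J = community.size
            for idx in rng.choice(J, size=n_deaths, replace=False):
                if rng.random() < m:                                  # immigration
                    community[idx] = rng.choice(meta_rel_abund.size, p=meta_rel_abund)
                else:                                                 # local birth
                    community[idx] = community[rng.integers(J)]
            return community

        rng = np.random.default_rng(1)
        meta = np.array([0.5, 0.3, 0.2])                # 3-species metacommunity
        local = rng.choice(3, size=500, p=meta)
        for _ in range(100):
            local = neutral_step(local, meta, m=0.05, n_deaths=50, rng=rng)
        print(np.bincount(local, minlength=3))          # local species abundances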

    Markov Chain Monte Carlo Exploration of Minimal Supergravity with Implications for Dark Matter

    We explore the full parameter space of Minimal Supergravity (mSUGRA), allowing all four continuous parameters (the scalar mass m_0, the gaugino mass m_1/2, the trilinear coupling A_0, and the ratio of Higgs vacuum expectation values tan beta) to vary freely. We apply current accelerator constraints on sparticle and Higgs masses, and on the b -> s gamma branching ratio, and discuss the impact of the constraints on g_mu - 2. To study dark matter, we apply the WMAP constraint on the cold dark matter density. We develop Markov Chain Monte Carlo (MCMC) techniques to explore the parameter regions consistent with WMAP, finding them to be considerably superior to previously used methods for exploring supersymmetric parameter spaces. Finally, we study the reach of current and future direct detection experiments in light of the WMAP constraint. Comment: 16 pages, 4 figures.
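
    The scan itself can be sketched as a standard random-walk Metropolis-Hastings loop over the four continuous parameters, with the likelihood built from the experimental constraints. The toy Gaussian "relic density" likelihood, the predict_relic function, and the parameter bounds below are placeholders for the actual constraint calculation, not the paper's likelihood.

        import numpy as np

        def in_allowed_region(theta):
            m0, m12, A0, tanb = theta                    # illustrative bounds only
            return (0 < m0 < 4000) and (0 < m12 < 2000) and (2 < tanb < 60)

        def log_like(theta, predict_relic, relic_obs=0.113, relic_err=0.009):
            # relic_obs/relic_err are illustrative stand-ins for the WMAP measurement
            if not in_allowed_region(theta):
                return -np.inf
            return -0.5 * ((predict_relic(theta) - relic_obs) / relic_err) ** 2

        def mcmc(theta0, predict_relic, n_steps, step_sizes, rng):
            theta = np.asarray(theta0, dtype=float)
            ll = log_like(theta, predict_relic)
            chain = []
            for _ in range(n_steps):
                prop = theta + step_sizes * rng.normal(size=theta.size)
                ll_prop = log_like(prop, predict_relic)
                if np.log(rng.random()) < ll_prop - ll:  # Metropolis accept/reject
                    theta, ll = prop, ll_prop
                chain.append(theta.copy())
            return np.array(chain)

    Because the chain spends its time in high-likelihood regions, it concentrates samples in the narrow allowed strips of parameter space that grid scans cover very inefficiently.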

    Data Analysis Challenges for the Einstein Telescope

    The Einstein Telescope is a proposed third-generation gravitational wave detector that will operate in the region of 1 Hz to a few kHz. As well as the inspiral of compact binaries composed of neutron stars or black holes, the lower frequency cut-off of the detector will open the window to a number of new sources. These will include the end stage of inspirals, plus the merger and ringdown of intermediate mass black holes, where the masses of the component bodies are on the order of a few hundred solar masses. There is also the possibility of observing intermediate mass ratio inspirals, where a stellar mass compact object inspirals into a black hole which is a few hundred to a few thousand times more massive. In this article, we investigate some of the data analysis challenges for the Einstein Telescope, such as the effects of increased source numbers, the need for more accurate waveform models, and some of the computational issues that a data analysis strategy might face. Comment: 18 pages. Invited review for the Einstein Telescope special edition of GR

    Supersymmetry Without Prejudice

    We begin an exploration of the physics associated with the general CP-conserving MSSM with Minimal Flavor Violation, the pMSSM. The 19 soft SUSY breaking parameters in this scenario are chosen so as to satisfy all existing experimental and theoretical constraints, assuming that the WIMP is a conventional thermal relic, i.e., the lightest neutralino. We scan this parameter space twice, using both flat and log priors for the soft SUSY breaking mass parameters, and compare the results, which yield similar conclusions. Detailed constraints from both LEP and the Tevatron searches play a particularly important role in obtaining our final model samples. We find that the pMSSM leads to a much broader set of predictions for the properties of the SUSY partners, as well as for a number of experimental observables, than those found in any of the conventional SUSY breaking scenarios such as mSUGRA. This set of models can easily lead to atypical expectations for SUSY signals at the LHC. Comment: 61 pages, 24 figures. References, figures, and text added, typos fixed; this version has reduced/bitmapped figures. For a version with better figures please go to http://www.slac.stanford.edu/~rizz
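
    The effect of the prior choice mentioned above is easy to see in isolation: drawing a soft SUSY-breaking mass uniformly in the mass versus uniformly in its logarithm over the same range concentrates the log-prior sample at the low-mass end, which is why the two scans can populate model space quite differently before any constraints are applied. The range below is illustrative, not the paper's.

        import numpy as np

        rng = np.random.default_rng(2)
        lo, hi = 100.0, 4000.0                  # illustrative mass range in GeV

        flat_sample = rng.uniform(lo, hi, size=100_000)                      # flat prior
        log_sample = np.exp(rng.uniform(np.log(lo), np.log(hi), size=100_000))  # log prior

        print("flat prior, fraction below 500 GeV:", np.mean(flat_sample < 500))
        print("log prior,  fraction below 500 GeV:", np.mean(log_sample < 500))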

    MCMC implementation for Bayesian hidden semi-Markov models with illustrative applications

    Copyright © Springer 2013. The final publication is available at Springer via http://dx.doi.org/10.1007/s11222-013-9399-z. Hidden Markov models (HMMs) are flexible, well-established models useful in a diverse range of applications. However, one potential limitation of such models lies in their inability to explicitly structure the holding times of each hidden state. Hidden semi-Markov models (HSMMs) are more useful in the latter respect, as they incorporate additional temporal structure by explicit modelling of the holding times. However, HSMMs have generally received less attention in the literature, mainly due to their intensive computational requirements. Here a Bayesian implementation of HSMMs is presented. Recursive algorithms are proposed in conjunction with Metropolis-Hastings in such a way as to avoid sampling from the distribution of the hidden state sequence in the MCMC sampler. This provides a computationally tractable estimation framework for HSMMs, avoiding the limitations associated with the conventional EM algorithm regarding model flexibility. Performance of the proposed implementation is demonstrated through simulation experiments as well as an illustrative application relating to recurrent failures in a network of underground water pipes, where random effects are also included in the HSMM to allow for pipe heterogeneity.
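
    The computational core of such an approach is an explicit-duration forward recursion that evaluates the observed-data likelihood by summing over hidden states and holding times, so that a Metropolis-Hastings sampler never needs to sample the hidden state sequence itself. The sketch below is a generic version of that recursion, not the paper's algorithm; all array shapes and names are illustrative.

        import numpy as np

        def hsmm_loglik(obs_logprob, log_init, log_trans, log_dur):
            """obs_logprob: (T, K) per-time log emission probabilities per state
               log_init:    (K,)   initial state log probabilities
               log_trans:   (K, K) transition log probabilities (no self-transitions)
               log_dur:     (K, D) log probabilities of holding times 1..D"""
            T, K = obs_logprob.shape
            D = log_dur.shape[1]
            cum = np.cumsum(obs_logprob, axis=0)             # for fast segment sums
            seg = lambda s, t, j: cum[t, j] - (cum[s - 1, j] if s > 0 else 0.0)
            alpha = np.full((T, K), -np.inf)                 # segment ends at t in state j
            for t in range(T):
                for j in range(K):
                    terms = []
                    for d in range(1, min(D, t + 1) + 1):
                        s = t - d + 1                        # segment start
                        emit = seg(s, t, j) + log_dur[j, d - 1]
                        if s == 0:
                            terms.append(log_init[j] + emit)
                        else:
                            prev = alpha[s - 1] + log_trans[:, j]
                            terms.append(np.logaddexp.reduce(prev) + emit)
                    alpha[t, j] = np.logaddexp.reduce(terms)
            return np.logaddexp.reduce(alpha[-1])

    A Metropolis-Hastings sampler can then propose new parameter values, call this likelihood, and accept or reject in the usual way, keeping the hidden sequence integrated out of the sampler entirely.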

    Random effects diagonal metric multidimensional scaling models

    By assuming a distribution for the subject weights in a diagonal metric (INDSCAL) multidimensional scaling model, the subject weights become random effects. Including random effects in multidimensional scaling models offers several advantages over traditional diagonal metric models such as those fitted by the INDSCAL, ALSCAL, and other multidimensional scaling programs. Unlike traditional models, the number of parameters does not increase with the number of subjects, and, because the distribution of the subject weights is modeled, the construction of linear models of the subject weights and the testing of those models is immediate. Here we define a random effects diagonal metric multidimensional scaling model, give computational algorithms, describe our experiences with these algorithms, and provide an example illustrating the use of the model and algorithms.
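
    As a small illustration of the model class (hypothetical names; the lognormal weight distribution is chosen purely for illustration), each subject's distances come from stretching a common configuration along each dimension by that subject's weights, and treating those weights as random effects simply means drawing them from a population distribution rather than estimating one free weight vector per subject.

        import numpy as np

        def weighted_distances(X, w):
            """X: (n_objects, n_dims) common configuration; w: (n_dims,) one
            subject's dimension weights. Returns that subject's distance matrix."""
            diff = X[:, None, :] - X[None, :, :]      # pairwise coordinate differences
            return np.sqrt((w * diff ** 2).sum(axis=-1))

        rng = np.random.default_rng(3)
        X = rng.normal(size=(6, 2))                   # 6 objects in 2 dimensions
        for k in range(3):                            # 3 simulated subjects
            w_k = rng.lognormal(mean=0.0, sigma=0.3, size=2)  # random-effect weights
            print(f"subject {k}: weights {np.round(w_k, 2)}")
            print(np.round(weighted_distances(X, w_k), 2))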

    Modelling with age, period and cohort in demography

    No full text available. SIGLE record LD:D47909/83 / BLDSC - British Library Document Supply Centre, GB (United Kingdom)